
    Manipulation of Polymorphic Objects Using Two Robotic Arms through CNN Networks

    This article presents an interaction system for two 5-DOF (Degrees of Freedom) manipulators with 3-finger grippers, used to grasp and displace up to 10 polymorphic objects shaped as pentominoes inside a VRML (Virtual Reality Modeling Language) environment, performing element detection and classification with an R-CNN (Region Proposal Convolutional Neural Network) and grasp-point detection and gripping orientation with a DAG-CNN (Directed Acyclic Graph Convolutional Neural Network). The feasibility of a grasp is determined by how well the geometry of an element fits the free space between the gripper fingers. A database of grasp positions for the polyshapes was created as training data, so that network training could focus on finding the desired grasp positions; any other grasp found is then considered feasible, eliminating the need to search for additional, better grasp points when the shape, inclination, or rotation angle changes. Under varying test conditions, the system successfully gripped each object with one manipulator and passed it to the second manipulator, at the opposite end of the work area, as part of the grouping process, using the R-CNN and DAG-CNN with accuracies of 95.5% and 98.8%, respectively, and performing a geometric analysis of the objects to determine the displacement and rotation required by the gripper for each individual grip.
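    As an illustration of the geometric feasibility check described above, the sketch below (an assumption in Python, not the authors' code) accepts a grasp when the object's cross-section along the candidate grasp line fits the free space between the gripper fingers; the mask, scale, and opening values are hypothetical.

    ```python
    import numpy as np

    # Illustrative sketch: a grasp on a pentomino-like shape is treated as
    # feasible when the object's width along the candidate grasp line fits
    # between the gripper fingers. All names and values are assumptions.

    GRIPPER_MAX_OPENING_MM = 60.0  # assumed maximum finger opening

    def grasp_is_feasible(object_mask: np.ndarray, mm_per_px: float,
                          grasp_row: int) -> bool:
        """Check whether the object's width along one image row fits the gripper.

        object_mask : boolean image mask of the segmented object
        mm_per_px   : scale of the virtual camera image
        grasp_row   : image row crossed by the candidate grasp line
        """
        columns = np.where(object_mask[grasp_row])[0]
        if columns.size == 0:
            return False  # grasp line does not intersect the object
        width_mm = (columns.max() - columns.min()) * mm_per_px
        return width_mm <= GRIPPER_MAX_OPENING_MM

    # Example: a ~30 mm wide section fits a 60 mm opening, so the grasp is kept
    mask = np.zeros((100, 100), dtype=bool)
    mask[40:60, 35:65] = True  # synthetic 30 px wide object
    print(grasp_is_feasible(mask, mm_per_px=1.0, grasp_row=50))  # True
    ```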

    Tool delivery robot using convolutional neural network

    This article presents a human-robot interaction system in which algorithms were developed to control the movement of a manipulator so that it can search for a desired tool and deliver it, with a specified orientation, into the user's hand. A Convolutional Neural Network (CNN) is used to detect and recognize the user's hand, a geometric analysis adjusts the delivery pose of the tool from any position of the robot and any orientation of the gripper, and a trajectory-planning algorithm drives the movement of the manipulator. The activations of a CNN developed in previous work were reused to detect the position and orientation of the hand in the workspace and thus track it in real time, both in a simulated environment and in a real environment.
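    A minimal sketch of the geometric adjustment step, assuming the CNN supplies the hand's 3-D position: the hypothetical helper below computes the yaw and pitch the gripper would need to point the tool toward the user's hand.

    ```python
    import numpy as np

    # Minimal sketch (an assumption, not the paper's code): given the hand
    # position detected by the CNN and the current tool-tip position, compute
    # the yaw and pitch that aim the tool toward the user's hand.

    def delivery_orientation(hand_xyz: np.ndarray, tool_tip_xyz: np.ndarray):
        """Return (yaw, pitch) in radians that aim the tool at the hand."""
        v = hand_xyz - tool_tip_xyz
        yaw = np.arctan2(v[1], v[0])                     # rotation about z
        pitch = np.arctan2(v[2], np.linalg.norm(v[:2]))  # elevation angle
        return yaw, pitch

    # Example: hand 30 cm ahead of and 10 cm above the tool tip
    yaw, pitch = delivery_orientation(np.array([0.3, 0.0, 0.1]),
                                      np.array([0.0, 0.0, 0.0]))
    print(np.degrees(yaw), np.degrees(pitch))  # 0.0, ~18.4 degrees
    ```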

    Flexible Gripper, Design and Control for Soft Robotics

    This paper presents the 3D design of a flexible gripper for grasping polyform objects that require a certain degree of adaptation of the end effector for their manipulation. The 3D printing and construction of the gripper are described, and a fuzzy controller is implemented for its operation. The effector carries a flex resistor that provides information on the deflection of the gripper; this signal, together with the desired grip force, feeds the fuzzy controller, which regulates the current of the servomotors that make up the structure of the gripper and are responsible for ensuring the grip. The result is an efficient system for gripping polyform objects, handling deflections of up to 5 mm with a current close to 112 mA.
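    A minimal sketch of a fuzzy controller of the kind described, assuming triangular membership functions and Sugeno-style rules; the linguistic ranges and output current levels are illustrative, not the published controller.

    ```python
    # Minimal fuzzy-control sketch (an assumption, not the published controller):
    # inputs are the measured gripper deflection (mm, from the flex resistor) and
    # the desired grip force (N); the output is the servomotor current (mA).

    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def grip_current(deflection_mm: float, desired_force_n: float) -> float:
        # Membership degrees for each linguistic term (ranges are assumptions).
        low_def, high_def = tri(deflection_mm, -1, 0, 3), tri(deflection_mm, 2, 5, 8)
        low_f, high_f = tri(desired_force_n, -1, 0, 3), tri(desired_force_n, 2, 5, 8)

        # Sugeno-style rules: each rule maps to a crisp current level (mA).
        rules = [
            (min(low_def, low_f), 40.0),     # little deflection, light grip
            (min(low_def, high_f), 90.0),    # little deflection, firm grip
            (min(high_def, low_f), 70.0),    # large deflection, light grip
            (min(high_def, high_f), 112.0),  # large deflection, firm grip
        ]
        num = sum(w * c for w, c in rules)
        den = sum(w for w, _ in rules)
        return num / den if den > 0 else 0.0

    print(round(grip_current(5.0, 5.0)))  # ~112 mA at full deflection and force
    ```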

    Classification and Grip of Occluded Objects

    This paper presents a system for the detection, classification, and gripping of occluded objects using machine vision, artificial intelligence, and an anthropomorphic robot, as a solution for grasping elements that present occlusions. The deep learning algorithms used are based on Convolutional Neural Networks (CNN), specifically Fast R-CNN (Fast Region-Based CNN) and DAG-CNN (Directed Acyclic Graph CNN) for pattern recognition; the three-dimensional information of the environment was collected with a Kinect V1, and test simulations were run in VRML. A sequence of detection, classification, and gripping was programmed to determine which elements present occlusions and which type of tool generates the occlusion. According to the user's requirements, the desired elements are delivered (occluded or not) and the unwanted elements are removed. A program was developed with 88.89% accuracy in gripping and delivering occluded objects, using the Fast R-CNN and DAG-CNN, which achieved 70.9% and 96.2% accuracy, respectively: the first network detects elements without occlusions, and the second classifies the objects into five tools (Scalpel, Scissors, Screwdriver, Spanner, and Pliers). Gripping occluded objects requires accurately detecting the element at the top of the pile so it can be removed without affecting the rest of the environment. Additionally, the detection process requires that a part of the occluded tool be visible to determine the existence of occlusions in the stack.
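    One step of the sequence above, sketched under assumptions: with Kinect depth available, the element on top of the pile can be taken as the detection whose region lies closest to the camera. The function and box format below are hypothetical.

    ```python
    import numpy as np

    # Sketch (an assumption, not the authors' code): when tools are stacked,
    # the element to remove first is the one on top of the pile, i.e. the
    # detection whose region is closest to the Kinect camera.

    def topmost_detection(depth_map_mm: np.ndarray, boxes) -> int:
        """Return the index of the detection with the smallest median depth.

        boxes are (x1, y1, x2, y2) pixel bounding boxes from the detector.
        """
        medians = []
        for x1, y1, x2, y2 in boxes:
            region = depth_map_mm[y1:y2, x1:x2]
            valid = region[region > 0]  # Kinect reports 0 where depth is missing
            medians.append(np.median(valid) if valid.size else np.inf)
        return int(np.argmin(medians))

    # Example: the second box lies 50 mm closer to the camera, so it is on top
    depth = np.full((480, 640), 900.0)
    depth[100:200, 100:200] = 850.0
    print(topmost_detection(depth, [(300, 300, 400, 400), (100, 100, 200, 200)]))  # 1
    ```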

    Video surveillance for monitoring driver's fatigue and distraction

    Fatigue and distraction in drivers represent a great risk for road safety. For both types of driver behavior problems, image analysis of the eyes, mouth, and head movements gives valuable information. We present in this paper a system for monitoring fatigue and distraction in drivers by evaluating their performance using image processing. We extract visual features related to nodding, yawning, eye closure and opening, and mouth movements to detect fatigue as well as to identify diversion of attention from the road. We achieve an average of 98.3% sensitivity and 98.8% specificity for the detection of driver fatigue, and 97.3% and 99.2% for the detection of driver distraction, when evaluating four video sequences with different drivers.
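    For reference, the reported figures follow the standard definitions of sensitivity and specificity; the short sketch below uses hypothetical frame counts (not taken from the paper) purely to illustrate the computation.

    ```python
    # Standard definitions of the reported metrics (not the authors' code):
    # sensitivity and specificity computed from per-frame detection outcomes.

    def sensitivity(tp: int, fn: int) -> float:
        """Fraction of fatigue (or distraction) events correctly detected."""
        return tp / (tp + fn)

    def specificity(tn: int, fp: int) -> float:
        """Fraction of normal-driving frames correctly left unflagged."""
        return tn / (tn + fp)

    # Hypothetical counts for one video sequence (illustration only)
    print(f"sensitivity = {sensitivity(tp=59, fn=1):.1%}")   # 98.3%
    print(f"specificity = {specificity(tn=247, fp=3):.1%}")  # 98.8%
    ```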

    Automatic food bio-hazard detection system

    This paper presents the design of a convolutional neural network architecture oriented to the detection of food waste, in order to generate a low-, medium-, or critical-level alarm. An architecture based on four convolution layers is used, for which a database of 100 samples was prepared; the database is used to tune the hyperparameters that make up the final architecture during the training process. Confusion-matrix analysis shows 100% performance of the network, whose output feeds a fuzzy system that, depending on the duration of the detection, generates the different alarm levels associated with the risk.
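    A minimal sketch of a four-convolution-layer classifier of the kind described, written in PyTorch; the filter counts, input size, and two output classes are assumptions rather than the paper's exact architecture.

    ```python
    import torch
    import torch.nn as nn

    # Minimal sketch of a four-convolution-layer classifier (layer sizes and
    # the two output classes are assumptions, not the paper's architecture).

    class WasteNet(nn.Module):
        def __init__(self, num_classes: int = 2):  # e.g. waste / no waste
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(1))

    # Example forward pass on a single 128x128 RGB image
    logits = WasteNet()(torch.randn(1, 3, 128, 128))
    print(logits.shape)  # torch.Size([1, 2])
    ```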

    Visual control system for grip of glasses oriented to assistance robotics

    Assistance robotics is presented as a means of improving the quality of life of people with disabilities; an application case in assisted feeding is presented. This paper describes the development of a system based on artificial intelligence techniques for gripping a glass with a robotic arm so that it does not slip during manipulation as the liquid level varies. A Faster R-CNN is used to detect the glass and the arm's gripper, and from the data obtained by the network the mass of the beverage and the distance between the gripper and the liquid are estimated. These estimated values are the inputs of a fuzzy system whose output is the torque that the motor driving the gripper must exert. The system obtained 97.3% accuracy in detecting the elements of interest in the environment with the Faster R-CNN, and a 76% success rate in gripping the glass with the fuzzy algorithm.
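    A rough sketch of the geometric mass estimate, under stated assumptions (a cylindrical glass of known diameter and water density); the bounding boxes stand in for the Faster R-CNN detections of the glass and the liquid.

    ```python
    import numpy as np

    # Rough sketch of the mass estimate (assumptions only: a cylindrical glass,
    # a known real diameter, water density). The bounding boxes come from the
    # Faster R-CNN detections of the glass and the liquid.

    WATER_DENSITY_G_PER_CM3 = 1.0
    GLASS_DIAMETER_CM = 7.0  # assumed known from the experimental setup

    def liquid_mass_g(glass_box, liquid_box) -> float:
        """Estimate beverage mass from the liquid's visible height in the glass.

        Boxes are (x1, y1, x2, y2) in pixels; the glass's pixel width fixes the
        pixel-to-cm scale because its real diameter is assumed known.
        """
        glass_width_px = glass_box[2] - glass_box[0]
        cm_per_px = GLASS_DIAMETER_CM / glass_width_px
        liquid_height_cm = (liquid_box[3] - liquid_box[1]) * cm_per_px
        volume_cm3 = np.pi * (GLASS_DIAMETER_CM / 2) ** 2 * liquid_height_cm
        return WATER_DENSITY_G_PER_CM3 * volume_cm3

    # Example: the liquid occupies about 5 cm of the glass's height
    print(round(liquid_mass_g((100, 50, 170, 200), (100, 130, 170, 180))))  # ~192 g
    ```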

    Algorithm of detection, classification and gripping of occluded objects by CNN techniques and Haar classifiers

    The following paper presents the development of an algorithm in charge of detecting, classifying, and grabbing occluded objects, using artificial intelligence techniques, machine vision for recognition of the environment, and an anthropomorphic manipulator for handling the elements. Five types of tools were used for detection and classification; the user selects one of them so that the program searches for it in the work environment and delivers it in a specific area, overcoming difficulties such as occlusions of up to 70%. These tools were classified using two CNN (convolutional neural network) type networks: a Fast R-CNN (fast region-based CNN) for the detection and classification of occlusions, and a DAG-CNN (directed acyclic graph CNN) for classifying the tools. Furthermore, a Haar classifier was trained in order to compare its ability to recognize occlusions with that of the Fast R-CNN. The Fast R-CNN and DAG-CNN achieved 70.9% and 96.2% accuracy, respectively, the Haar classifier reached about 50% accuracy, and the application achieved 90% accuracy in gripping and delivering occluded objects.
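    A brief sketch of the Haar-classifier comparison step using OpenCV; the cascade and image file names are hypothetical, since the paper trains its own cascade on the occluded-tool images rather than using a stock model.

    ```python
    import cv2

    # Sketch of the Haar-classifier step (the cascade and image file names are
    # hypothetical placeholders for the paper's own trained cascade and frames).

    cascade = cv2.CascadeClassifier("occluded_tool_cascade.xml")  # hypothetical file
    image = cv2.imread("workspace.png")                           # hypothetical file
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # detectMultiScale scans the image at several scales and returns bounding boxes
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in detections:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print(f"{len(detections)} candidate occluded tools found")
    ```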

    Object gripping algorithm for robotic assistance by means of deep learning

    This paper presents the use of recent, state-of-the-art deep learning techniques that are still little addressed in robotic applications, through a new algorithm based on Faster R-CNN and CNN regression. Commonly implemented machine vision systems tend to require multiple stages to locate an object and allow a robot to take it, increasing noise in the system and processing times. Region-based convolutional networks solve this problem; two convolutional architectures are used, one for the classification and localization of three types of objects and one to determine the grip angle for a robotic gripper. In the established virtual environment, the grip algorithm runs at up to 5 frames per second with 100% object classification, and the Faster R-CNN implementation achieves 100% accuracy in the classification of the test database and over 97% average precision in locating the generated boxes for each element, successfully gripping the objects.
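    An illustrative sketch of the two-stage idea, not the authors' implementation: a pretrained torchvision Faster R-CNN localizes objects, and a small, assumed regression CNN predicts a grip angle from the cropped detection.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision

    # Illustrative sketch (assumptions throughout): a pretrained Faster R-CNN
    # localizes objects, and a tiny regression CNN predicts a grip angle from
    # the cropped detection. Neither network is the paper's trained model.

    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    detector.eval()

    class GripAngleNet(nn.Module):
        """Tiny CNN that regresses a single grip angle (radians) from a crop."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
            )

        def forward(self, crop: torch.Tensor) -> torch.Tensor:
            return self.net(crop)

    angle_net = GripAngleNet()
    image = torch.rand(3, 480, 640)  # placeholder frame from the virtual scene

    with torch.no_grad():
        detections = detector([image])[0]        # dict with boxes, labels, scores
        for box in detections["boxes"][:1]:      # take the highest-scoring detection
            x1, y1, x2, y2 = box.int().tolist()
            crop = image[:, y1:y2, x1:x2].unsqueeze(0)
            crop = F.interpolate(crop, size=(64, 64))  # normalize crop size
            print(f"grip angle: {angle_net(crop).item():.2f} rad")
    ```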